High-Level Fusion of Depth and Intensity for Pedestrian Classification
Authors
Abstract
This paper presents a novel approach to pedestrian classification that involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local receptive fields) directly from (dense) depth and intensity images. Both modalities are represented in terms of individual feature spaces, in each of which a discriminative model is learned to distinguish between pedestrians and non-pedestrians. We refrain from constructing a joint feature space and instead employ a high-level fusion of depth and intensity at the classifier level. Our experiments on a large real-world dataset demonstrate a significant performance improvement of the combined intensity-depth representation over depth-only and intensity-only models (a factor-of-four reduction in false positives at comparable detection rates). Moreover, high-level fusion outperforms low-level fusion in a joint feature space.
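The classifier-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic feature vectors in place of the gradient orientation histogram / local receptive field features, and scikit-learn logistic regression as a stand-in for the paper's discriminative models; averaging the per-modality posteriors is one simple choice of fusion rule.

```python
# High-level (classifier-level) fusion sketch: one model per modality,
# fused at the decision stage rather than in a joint feature space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                            # 1 = pedestrian, 0 = non-pedestrian
X_intensity = rng.normal(y[:, None], 1.0, (n, 32))   # synthetic intensity features
X_depth = rng.normal(y[:, None], 1.0, (n, 32))       # synthetic depth features

# One discriminative model per modality (no joint feature space).
clf_int = LogisticRegression().fit(X_intensity, y)
clf_dep = LogisticRegression().fit(X_depth, y)

# High-level fusion: combine per-modality posteriors, here by averaging.
p_fused = 0.5 * (clf_int.predict_proba(X_intensity)[:, 1]
                 + clf_dep.predict_proba(X_depth)[:, 1])
pred = (p_fused >= 0.5).astype(int)
print((pred == y).mean())  # accuracy of the fused decision on this toy data
```

By contrast, low-level fusion would concatenate `X_intensity` and `X_depth` into one feature vector and train a single classifier on the joint space.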
Similar papers
Fusion of Stereo Vision for Pedestrian Recognition using Convolutional Neural Networks
Pedestrian detection is a highly debated issue in the scientific community due to its outstanding importance for a large number of applications, especially in the fields of automotive safety, robotics and surveillance. In spite of the widely varying methods developed in recent years, pedestrian detection is still an open challenge whose accuracy and robustness have to be improved. Therefore, in ...
An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup
The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datas...
Change Detection in Urban Area Using Decision Level Fusion of Change Maps Extracted from Optic and SAR Images
The last few decades witnessed high urban growth rates in many countries. Urban growth can be mapped and measured by using remote sensing data and techniques along with several statistical measures. The purpose of this research is to detect urban change for use in urban planning. Change detection using remote sensing images can be classified into three methods: algebra-based, transfor...
Urban Vegetation Recognition Based on the Decision Level Fusion of Hyperspectral and Lidar Data
Introduction: Information about vegetation cover and its health has always been of interest to ecologists due to its importance in terms of habitat, energy production and other important characteristics of plants on Earth. Nowadays, developments in remote sensing technologies have made more remotely sensed data accessible to researchers. The combination of these data improves the obje...
Feature-based Multisensor Fusion Using Bayes Rule for Pedestrian Classification in a Dynamic Environment
This paper describes how multisensor data fusion increases the reliability of pedestrian classification using a Bayesian approach. The proposed approach fuses information provided by a laser range scanner and a monocular grey-level camera. Fusion is applied at the feature level by using sets of related features and possibly correlated sensor observations. The key is to combine in a probabilisti...
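A Bayes-rule sensor combination of the kind this snippet alludes to can be sketched as follows. This is a simplification, not the cited paper's method: it assumes the laser and camera observations are conditionally independent given the class (the paper allows for correlated observations), and that each sensor reports a posterior computed under a flat prior.

```python
# Bayes-rule fusion of two per-sensor pedestrian posteriors, assuming
# conditional independence of the sensors given the class.
def fuse_bayes(p_laser: float, p_camera: float, prior: float = 0.5) -> float:
    """Posterior P(pedestrian | laser, camera) from per-sensor posteriors
    (each computed under a flat prior) and a class prior."""
    num = p_laser * p_camera * prior
    den = num + (1.0 - p_laser) * (1.0 - p_camera) * (1.0 - prior)
    return num / den

print(fuse_bayes(0.8, 0.7))  # two agreeing sensors reinforce each other: ~0.903
```

Note that two moderately confident, agreeing sensors yield a fused posterior higher than either alone, while a disagreeing sensor pulls the estimate back toward the prior.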